Moving least-squares approximations for linearly-solvable stochastic optimal control problems
Authors
Abstract
Nonlinear stochastic optimal control problems are fundamental in control theory. A general class of such problems can be reduced to computing the principal eigenfunction of a linear operator. Here, we describe a new method for finding this eigenfunction using a moving least-squares function approximation. We use efficient iterative solvers that do not require matrix factorization, thereby allowing us to handle large numbers of basis functions. The bases are evaluated at collocation states that change over iterations of the algorithm, providing higher resolution in the regions of state space that are visited most often. The shape of the bases is defined automatically from the collocation states, in a way that avoids gaps in the coverage. Numerical results on test problems are provided.
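To make the key ingredient concrete, here is a minimal sketch of a moving least-squares (MLS) approximation in one dimension. This is an illustration of the general MLS idea only, not the paper's implementation: the bandwidth `h`, the Gaussian weight, and the linear polynomial basis are all our assumptions.

```python
import numpy as np

def mls_eval(x_query, x_data, f_data, h=0.2):
    """Evaluate a moving least-squares fit of scattered 1-D data.

    At each query point, a linear polynomial is fit to the data by
    weighted least squares, with weights decaying with distance from
    the query, so the local fit "moves" with the evaluation point.
    """
    # Gaussian weights: samples near the query dominate the local fit.
    w = np.exp(-((x_data - x_query) ** 2) / (2.0 * h ** 2))
    # Linear polynomial basis [1, x] evaluated at the data sites.
    B = np.stack([np.ones_like(x_data), x_data], axis=1)
    # Weighted normal equations: (B^T W B) c = B^T W f.
    BtW = B.T * w
    coeffs = np.linalg.solve(BtW @ B, BtW @ f_data)
    # Evaluate the fitted local polynomial at the query point.
    return coeffs[0] + coeffs[1] * x_query

# Noisy samples of sin(x); the MLS fit recovers a smooth local value.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, np.pi, 40)
fs = np.sin(xs) + 0.01 * rng.normal(size=xs.size)
approx = mls_eval(np.pi / 2, xs, fs)
```

In the paper's setting, the collocation states playing the role of `x_data` are themselves redistributed across iterations, and the bandwidth is set from the local density of those states rather than fixed as here.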
Related papers
Optimal Pareto Parametric Analysis of Two Dimensional Steady-State Heat Conduction Problems by MLPG Method
Numerical solutions obtained by the Meshless Local Petrov-Galerkin (MLPG) method are presented for two dimensional steady-state heat conduction problems. The MLPG method is a truly meshless approach, and neither the nodal connectivity nor the background mesh is required for solving the initial-boundary-value problem. The penalty method is adopted to efficiently enforce the essential boundary co...
Linearly Solvable Optimal Control
We summarize the recently-developed framework of linearly-solvable stochastic optimal control. Using an exponential transformation, the (Hamilton-Jacobi) Bellman equation for such problems can be made linear, giving rise to efficient numerical methods. Extensions to game theory are also possible and lead to linear Isaacs equations. The key restriction that makes a stochastic optimal control probl...
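As a reminder of the transformation this summary refers to (standard in the linearly-solvable MDP framework; the notation below is ours, not the abstract's): with state cost $q(x)$, passive dynamics $p(x'\mid x)$, and the exponentiated value function, or desirability, $z(x) = e^{-v(x)}$, the Bellman equation turns into a linear eigenvalue problem,

```latex
\[
e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x') \;=\; \lambda\, z(x),
\qquad z(x) = e^{-v(x)},
\]
```

whose principal eigenfunction $z$ is precisely the object the main paper approximates with moving least squares.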
Linearly Solvable Stochastic Control Lyapunov Functions
This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton–Jacobi–Bellman partial differential equation to a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear ...
A Unified Theory of Linearly Solvable Optimal Control
We present a unified theory of Linearly Solvable Optimal Control, that is, a class of optimal control problems whose solution reduces to solving a linear equation (for finite state spaces) or a linear integral equation (for continuous state spaces). The framework presented includes all previous work on linearly solvable optimal control as special cases. It includes both standard control problem...
Nonlinear State Dynamics, Computational Approximations and Multistage Manufacturing System Application
In this paper, we treat stochastic optimal control problems that are nonlinear in the state but otherwise are a Linear-Quadratic-Gaussian-Poisson problem in the control (LQGP/U): the dynamics are linear in the control and the control costs are quadratic. The uncertainty in the environment is modeled by Gaussian noise for continuous background fluctuations and discrete random jumps by Poisson ...